
    A heuristics approach for computing the largest eigenvalue of a pairwise comparison matrix

    Pairwise comparison matrices (PCMs) are widely used to capture subjective human judgements, especially in the context of the Analytic Hierarchy Process (AHP). In AHP, consistency of judgements is normally expressed as a consistency ratio (CR), which requires estimation of the largest eigenvalue (Lmax) of the PCM. However, many alternative prioritisation methods do not require calculation of the eigenvector, so Lmax, and hence the CR of a PCM, cannot be easily estimated from them. In this paper we propose a simple heuristic for calculating Lmax without any need for the Eigenvector Method (EM). We illustrate the proposed procedure with larger matrices, and use simulation to compare the accuracy of the heuristic against the actual Lmax for PCMs of various sizes. The proposed heuristic is found to be highly accurate, with errors below 1%. It can be carried out with simple calculations, without specialised mathematical procedures or software, and is independent of the method used to derive priorities from PCMs; it would thus help managers avoid biases and make better decisions.
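The consistency check this abstract refers to is standard in AHP: the consistency index is CI = (Lmax − n)/(n − 1) and CR = CI/RI, where RI is Saaty's random index for a matrix of size n. The heuristic itself is not described in the abstract; the sketch below shows the conventional eigenvalue-based computation that such a heuristic would replace, with power iteration as an assumed (not paper-specified) way of obtaining Lmax.

```python
import numpy as np

# Saaty's published random-index (RI) values for n = 1..9.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def largest_eigenvalue(pcm, iterations=1000):
    """Estimate Lmax of a positive reciprocal matrix by power iteration."""
    pcm = np.asarray(pcm, dtype=float)
    v = np.ones(pcm.shape[0])
    for _ in range(iterations):
        w = pcm @ v
        v = w / np.linalg.norm(w)
    # Rayleigh quotient of the (unit-norm) dominant eigenvector.
    return float(v @ pcm @ v)

def consistency_ratio(pcm):
    """CR = CI / RI, where CI = (Lmax - n) / (n - 1)."""
    n = len(pcm)
    if n <= 2:
        return 0.0  # 1x1 and 2x2 reciprocal matrices are always consistent
    ci = (largest_eigenvalue(pcm) - n) / (n - 1)
    return ci / RANDOM_INDEX[n]
```

For a perfectly consistent PCM, Lmax equals n exactly and CR is 0; judgements are conventionally deemed acceptable when CR is below 0.1.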

    IDENTIFYING VEHICLE ROUTE BASED ON USER ACTIVITY

    Techniques are presented herein for an algorithmic framework to select the best vehicle route based on user activity. A minimum wireless service level is guaranteed in each leg of the route by accounting for anticipated user activity and localized network overloads estimated from route queries.
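The abstract describes only the framework's goal: keep every leg of the chosen route above a minimum wireless service level, then pick the best qualifying route. A minimal sketch of that leg-wise feasibility filter follows; the data model, field names, and the "fastest feasible route wins" tie-break are all assumptions, not details from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Leg:
    duration_min: float
    # Hypothetical prediction combining anticipated user activity and
    # localized network overloads estimated from route queries.
    predicted_service_level: float

def best_route(routes, min_service_level):
    """Among routes whose every leg meets the minimum service level,
    return the fastest; None if no candidate qualifies."""
    feasible = [r for r in routes
                if all(leg.predicted_service_level >= min_service_level
                       for leg in r)]
    if not feasible:
        return None
    return min(feasible, key=lambda r: sum(leg.duration_min for leg in r))
```

A route that is fastest overall but dips below the threshold on any single leg is rejected outright, which is what "guaranteed in each leg" implies.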

    Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements

    Emotion evoked by an advertisement plays a key role in influencing brand recall and eventual consumer choices. Automatic ad affect recognition has several useful applications. However, content-based feature representations give no insight into how affect is modulated by aspects such as the ad's scene setting, salient object attributes and their interactions; nor do such approaches inform us about how humans prioritize visual information for ad understanding. Our work addresses these lacunae by decomposing video content into detected objects, coarse scene structure, object statistics and actively attended objects identified via eye-gaze. We measure the importance of each of these information channels by systematically incorporating related information into ad affect prediction models. Contrary to the popular notion that ad affect hinges on the narrative and the clever use of linguistic and social cues, we find that actively attended objects and the coarse scene structure encode affective information better than individual scene objects or conspicuous background elements.
    Comment: Accepted for publication in the Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, US

    AVEID: Automatic Video System for Measuring Engagement In Dementia

    Engagement in dementia is typically measured using behavior observational scales (BOS) that are tedious and labor-intensive to annotate, and are therefore not easily scalable. We propose AVEID, a low-cost, easy-to-use video-based engagement measurement tool to determine the engagement level of a person with dementia (PwD) during digital interaction. We show that the objective behavioral measures computed via AVEID correlate well with subjective expert impressions for the popular MPES and OME BOS, confirming its viability and effectiveness. Moreover, AVEID measures can be obtained for a variety of engagement designs, thereby facilitating large-scale studies with PwD populations.
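The validation described here — checking that automated engagement measures agree with expert BOS annotations — is typically quantified with a correlation coefficient. The sketch below computes Pearson's r between automated scores and expert ratings; the specific statistic, the score scales, and the sample values are all illustrative assumptions, not data from the paper.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-session scores: automated video-based measures vs
# expert BOS ratings of the same interaction sessions.
automated = [0.2, 0.5, 0.7, 0.9, 0.4]
expert    = [1, 2, 3, 4, 2]
```

An r close to +1 over many sessions is the kind of evidence that would confirm an automated tool tracks expert judgement.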

    Discovering Gender Differences in Facial Emotion Recognition via Implicit Behavioral Cues

    We examine the utility of implicit behavioral cues, in the form of EEG brain signals and eye movements, for gender recognition (GR) and emotion recognition (ER). Specifically, the examined cues are acquired via low-cost, off-the-shelf sensors. We asked 28 viewers (14 female) to recognize emotions from unoccluded (no mask) as well as partially occluded (eye and mouth masked) emotive faces. The experimental results reveal that (a) reliable GR and ER is achievable with EEG and eye features, (b) differential cognitive processing, especially for negative emotions, is observed for males and females, and (c) some of these cognitive differences manifest under partial face occlusion, as typified by the eye and mouth mask conditions.
    Comment: To be published in the Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction, 201

    Social media and successful retail operations in the hyper-customisation era

    It's possible to increase both customer satisfaction and profitability, but careful planning is needed, write Usha Ramanathan, Nachiappan (Nachi) Subramanian and Guy Parrot.

    Evaluating Content-centric vs User-centric Ad Affect Recognition

    Despite the fact that advertisements (ads) often include strongly emotional content, very little work has been devoted to affect recognition (AR) from ads. This work explicitly compares content-centric and user-centric ad AR methodologies, and evaluates the impact of enhanced AR on computational advertising via a user study. Specifically, we (1) compile an affective ad dataset capable of evoking coherent emotions across users; (2) explore the efficacy of content-centric convolutional neural network (CNN) features for encoding emotions, and show that CNN features outperform low-level emotion descriptors; (3) examine user-centered ad AR by analyzing electroencephalogram (EEG) responses acquired from eleven viewers, and find that EEG signals encode emotional information better than content descriptors; (4) investigate the relationship between objective AR and subjective viewer experience while watching an ad-embedded online video stream, based on a study involving 12 users. To our knowledge, this is the first work to (a) expressly compare user- vs content-centered AR for ads, and (b) study the relationship between the modeling of ad emotions and its impact on a real-life advertising application.
    Comment: Accepted at the ACM International Conference on Multimodal Interaction (ICMI) 201